In approaching this problem, I researched formulas for measuring the efficacy of advertisements from these pages: https://www.linkedin.com/pulse/10-important-online-advertising-formulas-taeef-najib, https://webrunnermedia.com/facebook-ad-metrics/, and https://neilpatel.com/blog/the-5-important-metrics-of-facebook-ad-campaigns/.
These articles described several different ways of evaluating advertisement efficacy, which I needed to learn in order to comprehensively address the needs of Company XYZ.
However, having eight variables makes it difficult to rank each ad group effectively, especially since the ranking depends on whether Company XYZ is attempting to maximize revenue or minimize costs. An efficient company should attempt to maximize revenue while minimizing its operating costs. Therefore, I focused on the five bolded variables. CTR gives us an idea of how much traffic a certain ad receives. CPM and CPA illustrate an ad's cost per thousand impressions and per specific action, respectively. ROI and ROAS demonstrate the returns from advertisements.
From this, I attempted to determine which ad_group would have the lowest costs (low CPM and CPA) and the highest returns and traffic (high ROI, ROAS, and CTR).
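To make these five formulas concrete before computing them on the data, here is a small worked example with hypothetical numbers (none of these values come from adtable):

```r
# Worked example of the five metrics with made-up numbers (not from adtable).
shown     <- 100000   # impressions
clicked   <- 2500     # clicks
converted <- 125      # conversions (e.g. purchases)
avg_cpc   <- 0.40     # average cost per click, in dollars
revenue   <- 1800     # total revenue, in dollars

cost <- avg_cpc * clicked         # total spend: $1000

ctr  <- clicked / shown * 100     # 2.5%: share of impressions clicked
cpm  <- cost / shown * 1000       # $10: cost per 1000 impressions
cpa  <- cost / converted          # $8: cost per conversion
roi  <- (revenue - cost) / cost   # 0.8: profit per dollar spent
roas <- revenue / cost            # 1.8: revenue per dollar spent
```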
library(tidyverse)
library(readr)
library(ggthemes)
library(gridExtra)
library(xts)
library(forecast)
adtable <- read_csv("adtable.csv")
head(adtable)
names(adtable)
## [1] "date" "shown" "clicked"
## [4] "converted" "avg_cost_per_click" "total_revenue"
## [7] "ad"
ad_metrics <- adtable %>%
  group_by(ad) %>%
  summarize(ctr  = mean(clicked) / mean(shown) * 100,
            cpm  = mean(avg_cost_per_click * clicked) / mean(shown) * 1000,
            cpa  = mean(avg_cost_per_click * clicked) / mean(converted),
            cr   = mean(converted) / mean(clicked) * 100,
            roi  = (sum(total_revenue) - sum(avg_cost_per_click * clicked)) / sum(avg_cost_per_click * clicked),
            roas = sum(total_revenue) / sum(avg_cost_per_click * clicked))
# Calculating all of our metrics.
ctr_opt  <- ad_metrics %>% arrange(desc(ctr))
cpm_opt  <- ad_metrics %>% arrange(cpm)
cpa_opt  <- ad_metrics %>% arrange(cpa)
cr_opt   <- ad_metrics %>% arrange(desc(cr))
roi_opt  <- ad_metrics %>% arrange(desc(roi))
roas_opt <- ad_metrics %>% arrange(desc(roas))
head(ctr_opt, 10)
head(cpm_opt, 10)
head(cpa_opt, 10)
head(cr_opt, 10)
head(roi_opt, 10)
head(roas_opt, 10)
# Using head(x, 10) because not all of the ad groups appear continuously.
ad_metrics %>%
filter(ad == "ad_group_31")
Ad Group 31 had the highest ROI and ROAS of any ad group. For a company looking to sell its product, such as a food company, an ad group that yields high returns is definitely what an enterprise should look for. Its CPA is also the second lowest, meaning that each action (selling food) is fairly cheap because the revenue is so high. The CPM supports this: it is also the second lowest, meaning that putting out the ad is fairly cheap. Finally, the conversion rate for this ad group is the sixth highest, meaning that consumers are buying the products.
Issues arise despite the high ROI and ROAS because the CTR is astronomically low for Ad Group 31. It is almost last in terms of CTR; therefore, the company should implement other ads that have a higher CTR, because outreach matters for a company that would like to reach its full potential. While revenue matters, there is no point if your company does not attract a large amount of traffic.
ad_metrics %>%
filter(ad == "ad_group_18")
Ad Group 18 has the highest CTR, meaning that it has the highest percentage of clicks for every time the ad is shown. Although its ROI is negative and its ROAS is 17 times lower than that of Group 31, Ad Group 18 does have the third-lowest CPA (Group 31 has the second lowest) and the ninth-best ROAS, accounting for cheap costs for every action it advertises. Ad Group 18 makes up for the extremely low CTR of Group 31 while maintaining a comparatively low CPA. Together, Ad Groups 18 and 31 provide both a high CTR and a high ROI. Although Group 18's low ROI is a concern, the ROI of Group 31 compensates for this slightly negative ROI and diminutive ROAS. The conversion rate of Group 18 is also the eighth highest, suggesting that consumers are buying the products when they click the ad.
An issue with this pairing is that it is somewhat front-heavy on CTR, just like Ad Group 31. While the combination of a high click-through rate and a high purchase rate per click is optimal, the next ad groups should be more stable in their ROI, ROAS, and CTR.
ad_metrics %>%
filter(ad == "ad_group_2")
Ad Group 2 is not in the top 10 for CTR (though its CTR is still significantly higher than Group 31's), but it has the 10th-lowest CPM. Furthermore, Ad Group 2 has the highest conversion rate of all the ad groups and a positive ROI and ROAS (the second best of each). This means its revenue stream is stable while its click-through rate and traffic are still moderately available. Although its costs are not as low as those of nine other ad groups, Ad Group 2 still has relatively low costs per 1000 impressions. This is certainly Ad Group 2's weak point: the cost becomes an issue if the revenue is not at least average.
Ad Group 2's main function should be to provide steady revenue with a high conversion rate alongside the other ad groups. Furthermore, the high conversion rate suggests that Ad Group 2 will continue to generate revenue, and its second-highest ROI and ROAS clearly show that it has a high revenue or profit margin.
ad_metrics %>%
filter(ad == "ad_group_16")
Ad Group 16 is another stable ad group, although it is not necessarily at the very top of many metrics. It is not in the top 10 for CTR or CPM, but it has the third-lowest CPA, the fourth-highest CR, and the third-highest ROI and ROAS. It is similar to Ad Group 2, with a slightly higher CPM and a lower CR, ROI, and ROAS, but a higher CTR.
ad_metrics %>%
filter(ad == "ad_group_13")
Ad Group 13 is another, albeit less stable, ad group. It has the lowest CPA, the eighth-highest CTR, the second-highest CR, and the ninth-highest ROAS. This means the ad group does not cost a significant amount for each action on the website, has a high amount of traffic, drives a substantial number of purchases, and generates more than 80% of the amount spent on it.
The fatal flaw of Group 13 lies in its ROI and CPM. Its ROI is negative, suggesting that the average cost per click is higher than expected. Its CPM is also extremely high, which is hopefully offset by the other ad groups. Its main benefit is the low cost per action, and its CPM could be reduced by distributing this ad group more widely.
A few issues with these metrics include the lack of insight into how many distinct viewers are reaching the sites. Furthermore, I am not entirely sure how Group 31 generates revenue with very little traffic. These metrics are also definitely incomplete and do not provide the full picture of the best ads; they focus mainly on reducing cost and maximizing revenue. Although that is a decent strategy for earning profit, it is not a great one if Company XYZ is actually trying to advertise its name before attempting to maximize revenue. Put simply, none of these metrics account for the number of people who are buying the products or what the ad actually looks like. Their heavy focus on cost and revenue is good for a company that would like to make money, but bad for a company that would also like to improve its ads in the future.
list_ads <- c("ad_group_31", "ad_group_18", "ad_group_2", "ad_group_16", "ad_group_13")
all_ads  <- paste0("ad_group_", 1:40)
ad_data <- function(ad_group_n){
  d_g <- adtable %>%
    filter(ad == ad_group_n)
  ad_plot <- ggplot(data = d_g, aes(x = date, y = shown)) +
    geom_bar(stat = "identity", fill = "black", width = 0.85) +
    geom_smooth(method = "lm") +
    ggtitle(ad_group_n) +
    scale_y_continuous(limits = c(0, 200000),
                       expand = c(0, 0)) +
    theme_wsj()
  ad_plot
}
sum_graphs <- lapply(all_ads, FUN = ad_data)
sum_graphs
This does not really provide a prediction; rather, it just draws a trend line on each graph without labeling the trend line's slope.
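The slope that geom_smooth(method = "lm") draws but does not label can be recovered by fitting the same linear model directly with lm(). Below is a sketch on toy data that stands in for adtable (same column names: ad, date, shown); the numbers are made up for illustration:

```r
# Sketch: recover the slope of the trend line that geom_smooth(method = "lm")
# draws, by fitting the same linear model directly with lm().
toy <- data.frame(
  ad    = rep(c("ad_group_1", "ad_group_2"), each = 5),
  date  = rep(as.Date("2015-10-01") + 0:4, times = 2),
  shown = c(100, 110, 120, 130, 140,  200, 190, 180, 170, 160)
)

# One fitted slope per ad group, in shown-per-day units.
slopes <- sapply(split(toy, toy$ad),
                 function(d) unname(coef(lm(shown ~ as.numeric(date), data = d))[2]))
slopes
# ad_group_1 trends up by 10 shown/day; ad_group_2 trends down by 10/day.
```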
frcast_r <- function(ad_group_n){
  each_data <- function(ad_group_n){
    adtable %>%
      filter(ad == ad_group_n) %>%
      group_by(ad)
  }
  # Order by date; the series values are the shown counts.
  x <- xts(x = each_data(ad_group_n)$shown, order.by = each_data(ad_group_n)$date)
  x.ts <- ts(x, freq = 365, start = c(2015, 270), end = c(2015, 349))  # day 349 is 2015-12-15
  plot(forecast(ets(x.ts), 10), main = ad_group_n)
  as.numeric(forecast(ets(x.ts), 10)$mean)
}
lapply(all_ads, FUN = frcast_r)
## [[1]]
## [1] 68276.73 68276.73 68276.73 68276.73 68276.73 68276.73 68276.73
## [8] 68276.73 68276.73 68276.73
##
## [[2]]
## [1] 51081.43 51081.43 51081.43 51081.43 51081.43 51081.43 51081.43
## [8] 51081.43 51081.43 51081.43
##
## [[3]]
## [1] 167377.6 167377.6 167377.6 167377.6 167377.6 167377.6 167377.6
## [8] 167377.6 167377.6 167377.6
##
## [[4]]
## [1] 90714.88 90714.88 90714.88 90714.88 90714.88 90714.88 90714.88
## [8] 90714.88 90714.88 90714.88
##
## [[5]]
## [1] 50732.74 50732.74 50732.74 50732.74 50732.74 50732.74 50732.74
## [8] 50732.74 50732.74 50732.74
##
## [[6]]
## [1] 40634.39 40634.39 40634.39 40634.39 40634.39 40634.39 40634.39
## [8] 40634.39 40634.39 40634.39
##
## [[7]]
## [1] 57999.02 57999.02 57999.02 57999.02 57999.02 57999.02 57999.02
## [8] 57999.02 57999.02 57999.02
##
## [[8]]
## [1] 45579.67 45579.67 45579.67 45579.67 45579.67 45579.67 45579.67
## [8] 45579.67 45579.67 45579.67
##
## [[9]]
## [1] 123588.1 123588.1 123588.1 123588.1 123588.1 123588.1 123588.1
## [8] 123588.1 123588.1 123588.1
##
## [[10]]
## [1] 124216.7 124216.7 124216.7 124216.7 124216.7 124216.7 124216.7
## [8] 124216.7 124216.7 124216.7
##
## [[11]]
## [1] 18541.33 18541.33 18541.33 18541.33 18541.33 18541.33 18541.33
## [8] 18541.33 18541.33 18541.33
##
## [[12]]
## [1] 29435.29 29435.29 29435.29 29435.29 29435.29 29435.29 29435.29
## [8] 29435.29 29435.29 29435.29
##
## [[13]]
## [1] 155385.9 155385.9 155385.9 155385.9 155385.9 155385.9 155385.9
## [8] 155385.9 155385.9 155385.9
##
## [[14]]
## [1] 8533.717 8533.717 8533.717 8533.717 8533.717 8533.717 8533.717
## [8] 8533.717 8533.717 8533.717
##
## [[15]]
## [1] 15523.15 15523.15 15523.15 15523.15 15523.15 15523.15 15523.15
## [8] 15523.15 15523.15 15523.15
##
## [[16]]
## [1] 29622.55 29622.55 29622.55 29622.55 29622.55 29622.55 29622.55
## [8] 29622.55 29622.55 29622.55
##
## [[17]]
## [1] 139881.9 139881.9 139881.9 139881.9 139881.9 139881.9 139881.9
## [8] 139881.9 139881.9 139881.9
##
## [[18]]
## [1] 90179.77 90179.77 90179.77 90179.77 90179.77 90179.77 90179.77
## [8] 90179.77 90179.77 90179.77
##
## [[19]]
## [1] 19789.2 19789.2 19789.2 19789.2 19789.2 19789.2 19789.2 19789.2
## [9] 19789.2 19789.2
##
## [[20]]
## [1] 121959.1 121959.7 121960.1 121960.6 121960.9 121961.2 121961.5
## [8] 121961.7 121961.9 121962.0
##
## [[21]]
## [1] 26463.68 26463.68 26463.68 26463.68 26463.68 26463.68 26463.68
## [8] 26463.68 26463.68 26463.68
##
## [[22]]
## [1] 25813.27 25813.27 25813.27 25813.27 25813.27 25813.27 25813.27
## [8] 25813.27 25813.27 25813.27
##
## [[23]]
## [1] 47419.77 47419.77 47419.77 47419.77 47419.77 47419.77 47419.77
## [8] 47419.77 47419.77 47419.77
##
## [[24]]
## [1] 37188.38 37188.38 37188.38 37188.38 37188.38 37188.38 37188.38
## [8] 37188.38 37188.38 37188.38
##
## [[25]]
## [1] 177830.7 177830.7 177830.7 177830.7 177830.7 177830.7 177830.7
## [8] 177830.7 177830.7 177830.7
##
## [[26]]
## [1] 74354.84 74354.84 74354.84 74354.84 74354.84 74354.84 74354.84
## [8] 74354.84 74354.84 74354.84
##
## [[27]]
## [1] 65666.41 65666.41 65666.41 65666.41 65666.41 65666.41 65666.41
## [8] 65666.41 65666.41 65666.41
##
## [[28]]
## [1] 20539.3 20539.3 20539.3 20539.3 20539.3 20539.3 20539.3 20539.3
## [9] 20539.3 20539.3
##
## [[29]]
## [1] 20641.66 20641.66 20641.66 20641.66 20641.66 20641.66 20641.66
## [8] 20641.66 20641.66 20641.66
##
## [[30]]
## [1] 117893.3 117893.3 117893.3 117893.3 117893.3 117893.3 117893.3
## [8] 117893.3 117893.3 117893.3
##
## [[31]]
## [1] 131263.4 131263.4 131263.4 131263.4 131263.4 131263.4 131263.4
## [8] 131263.4 131263.4 131263.4
##
## [[32]]
## [1] 37584.17 37584.17 37584.17 37584.17 37584.17 37584.17 37584.17
## [8] 37584.17 37584.17 37584.17
##
## [[33]]
## [1] 15904.42 15904.42 15904.42 15904.42 15904.42 15904.42 15904.42
## [8] 15904.42 15904.42 15904.42
##
## [[34]]
## [1] 35928.92 35928.92 35928.92 35928.92 35928.92 35928.92 35928.92
## [8] 35928.92 35928.92 35928.92
##
## [[35]]
## [1] 60314.97 60314.97 60314.97 60314.97 60314.97 60314.97 60314.97
## [8] 60314.97 60314.97 60314.97
##
## [[36]]
## [1] 69095.32 69095.32 69095.32 69095.32 69095.32 69095.32 69095.32
## [8] 69095.32 69095.32 69095.32
##
## [[37]]
## [1] 75000.97 75000.97 75000.97 75000.97 75000.97 75000.97 75000.97
## [8] 75000.97 75000.97 75000.97
##
## [[38]]
## [1] 177591.4 177591.4 177591.4 177591.4 177591.4 177591.4 177591.4
## [8] 177591.4 177591.4 177591.4
##
## [[39]]
## [1] 25071.35 25071.35 25071.35 25071.35 25071.35 25071.35 25071.35
## [8] 25071.35 25071.35 25071.35
##
## [[40]]
## [1] 76809.07 76809.07 76809.07 76809.07 76809.07 76809.07 76809.07
## [8] 76809.07 76809.07 76809.07
On a side note, I am certain that a function such as predict() would have greatly helped me here. I did attempt to use a linear model with geom_smooth(method = 'lm'), but I do not know how to label the trend line in ggplot. Because I failed to do this, I can only predict that most of the ads will stay the same or steadily increase in the shown variable. I certainly have more to learn in future SDS classes and the R documentation to understand how to do this properly.
I did try to use the forecast package and lubridate to accomplish a similar goal. That said, this approach also gives us a range. I learned lubridate and forecast today, so my skills with them are a bit rough. I only have a range for the ad groups, so this is a rough estimate.
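As a sketch of the predict() idea mentioned above, a linear model can be fit to one series of shown counts and extrapolated 10 days ahead. The data here are fabricated for illustration, not taken from adtable:

```r
# Sketch of the predict() approach: fit a linear model of shown against day
# for one ad group, then extrapolate 10 days ahead.
days  <- 1:30
shown <- 50000 + 300 * days               # made-up upward trend in impressions
fit   <- lm(shown ~ days)
preds <- predict(fit, newdata = data.frame(days = 31:40))
preds                                     # 59300, 59600, ..., 62000
```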
Next, I cluster the ads into three groups: those whose avg_cost_per_click is going up, those going down, and those holding steady.
cc_s <- function(ad_group_n){
  c_c <- adtable %>%
    filter(ad == ad_group_n)
  cc_plot <- ggplot(data = c_c, aes(x = date, y = avg_cost_per_click)) +
    geom_bar(stat = "identity", fill = "black", width = 0.85) +
    ggtitle(ad_group_n) +
    geom_smooth(method = "lm") +
    theme_wsj()
  cc_plot
}
p_cc <- lapply(all_ads, FUN = cc_s)
p_cc
## (40 bar charts of avg_cost_per_click over time, one per ad group, not shown.)
# Need to subtract the avg_cost at the max date from the avg_cost at the min date.
# We need the final avg_cost and the initial avg_cost, taking date and cost together.
# Or there must be some statistical way to determine increasing/decreasing.
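One statistical way to make the increasing/decreasing/steady call, as the comment above suggests, is to fit a linear model of avg_cost_per_click on date for each ad group and bucket it by the sign of the slope. This is only a sketch: the "steady" threshold is arbitrary, and the toy data below stands in for adtable (in practice one would apply classify_trend to split(adtable, adtable$ad)):

```r
# Sketch: bucket an ad group by the sign of the fitted slope of
# avg_cost_per_click over time. The 0.001 "steady" threshold is arbitrary.
classify_trend <- function(d, threshold = 0.001) {
  slope <- coef(lm(avg_cost_per_click ~ as.numeric(date), data = d))[2]
  if (slope > threshold) "up" else if (slope < -threshold) "down" else "steady"
}

# Toy data for three illustrative groups.
toy <- list(
  rising  = data.frame(date = as.Date("2015-10-01") + 0:9,
                       avg_cost_per_click = seq(1.0, 1.9, by = 0.1)),
  falling = data.frame(date = as.Date("2015-10-01") + 0:9,
                       avg_cost_per_click = seq(1.9, 1.0, by = -0.1)),
  flat    = data.frame(date = as.Date("2015-10-01") + 0:9,
                       avg_cost_per_click = rep(1.5, 10))
)
res <- sapply(toy, classify_trend)
res
#  rising  falling     flat
#    "up"   "down" "steady"
```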
Increasing: Ad Groups 1, 3, 5, 9, 15, 18, 20, 25, 26, 31, 32, 37, 39, and 40.
Decreasing: Ad Groups 4, 6, 7, 8, 11, 12, 14, 16, 19, 22, 23, 24, 27, 28, 29, 33, 35, 36, and 38.
Steady: Ad Groups 2, 10, 13, 17, 21, 30, and 34.
As a side note: once again, I should have used some form of trend line and then split up the graphs by their slopes. Without that knowledge, I was unable to do so, which makes this classification much less accurate. This reaffirms that I need to learn more R. Thank you for your time.
Exercise 1, in my opinion, was done well. Exercise 2 has the correct idea, but it may not be what you were looking for. Exercise 3 is fairly accurate, but using code would have been optimal; I believe that computing the trend lines would have been much better.